Most published and unpublished dissertations should be excluded from meta-analyses: comment on Moyer et al.
Abstract
Moyer et al. [1] systematically collected published and unpublished dissertations evaluating psychosocial interventions for cancer patients and examined the methodological quality of these studies. They concluded that because published and unpublished dissertations differ little in methodological adequacy, the inclusion of unpublished dissertations in meta-analyses is desirable in order to avoid publication bias. Instead of focusing explicitly on quality, however, Moyer et al.'s [1] analysis combined criteria intended to reflect adequate reporting of results from trials with criteria that genuinely reflect quality, that is, the likelihood that trial results reflect underlying clinical realities. For instance, their first four criteria concern whether the dissertation reported the number of participants approached for consent to participate in a study; reported the number initially participating; reported comparisons of participants to patients approached but not participating; and reported the number dropping out of treatment. A dissertation could have earned a perfect score on these ratings simply for reporting this information, despite documenting extremely low uptake and low retention of participants, which would suggest a high probability of bias and potentially poor external and internal validity.

Nonetheless, the evidence reported by Moyer et al. [1] makes it clear that both published and unpublished dissertations are generally of poor quality. Dissertations with at least 10 patients per cell were included in their analyses, with mean numbers of initial participants per cell of 37.7 (SD = 42.5) in published dissertations and 29.4 (SD = 23.94) in unpublished dissertations. Moyer et al. do not indicate how many small, grossly underpowered studies were included among either the published or unpublished dissertations, but the mean cell sizes and large standard deviations for participants per cell suggest a substantial number.

The problems posed by studies with small cell sizes are not widely appreciated [2]. As demonstrated by Kraemer et al. [3], the inclusion of small, underpowered trials in meta-analyses results in substantially overestimated effect estimates due to confirmatory publication bias, and statistical correction is impossible when underpowered studies make up a large proportion of those available. To achieve 80% statistical power to detect a moderate effect size (e.g., d = 0.50), 64 patients would need to be randomized to each of the intervention and control groups. A small study of 20 patients per group would have only 34% power to detect a moderate effect size, and with 20 patients per group a fairly large observed effect size of 0.65 would be needed just to reach statistical significance. The problem is even worse than that, however, because small studies with true null effects that cross the p < 0.05 threshold do so by varying degrees: with 20 patients per group and a true null effect, the expected standardized effect size in a meta-analysis of statistically significant trials would be 0.90–1.00. Thus, counterintuitive as it may seem, grossly underpowered studies with positive results, including most published and unpublished dissertations, are most often false positives.

Cuijpers et al. [4] recently showed that, when only high-quality studies were considered, the effect estimate for psychotherapy for depression decreased from large (d = 0.74) to small (d = 0.22). Their quality criteria included sample size, use of intention-to-treat analyses, independent randomization, use of treatment manuals, and treatment integrity.
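The power figures cited in the preceding paragraphs can be reproduced with a short calculation. What follows is a minimal, illustrative sketch, assuming Python with scipy and statsmodels available; it is not part of Moyer et al.'s [1] or Kraemer et al.'s [3] analyses, and the variable names are ours.

from scipy import stats
from statsmodels.stats.power import TTestIndPower

analysis = TTestIndPower()

# Patients per group needed for 80% power to detect d = 0.50 (alpha = .05, two-sided).
n_needed = analysis.solve_power(effect_size=0.50, alpha=0.05, power=0.80)
print(f"n per group for 80% power at d = 0.50: {n_needed:.1f}")  # ~63.8, i.e. 64 per group

# Power actually achieved with only 20 patients per group.
power_20 = analysis.power(effect_size=0.50, nobs1=20, alpha=0.05)
print(f"power with 20 per group at d = 0.50: {power_20:.2f}")  # ~0.34

# Smallest observed d reaching p < 0.05 with 20 per group:
# for a two-sample t-test, t = d * sqrt(n/2), so d_crit = t_crit * sqrt(2/n).
n = 20
t_crit = stats.t.ppf(0.975, df=2 * n - 2)
d_crit = t_crit * (2 / n) ** 0.5
print(f"minimum significant observed d with 20 per group: {d_crit:.2f}")  # ~0.64

These values correspond to the 64-patient, 34% power, and roughly 0.65 figures quoted above.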
Of the studies reviewed by Moyer et al., 17% of published dissertations and 38% of unpublished dissertations were not randomized trials at all; only 12 and 6% of published and unpublished dissertations, respectively, used intention-to-treat analyses; only 11 and 18% described a specific method of randomization and measures to prevent subterfuge; fewer than half in either group used treatment manuals; and only 67 and 49% monitored intervention implementation. Most studies reported that they assessed baseline equivalence, but Moyer et al. do not report whether it was achieved. Indeed, findings of baseline equivalence of intervention and control groups, based on the absence of statistically significant differences, are often meaningless in small studies because there is too little power to detect differences that may be individually or collectively decisive in determining the outcome of a trial. The literature concerning psychosocial interventions for cancer patients has been shown to have serious methodological shortcomings [5] as well as clinical and statistical heterogeneity [6]. The pervasiveness of these problems raises concerns about whether studies should automatically be included in meta-analyses based simply on their availability [2], or even whether a summary estimate …
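The point about baseline equivalence testing can be illustrated with a brief simulation: with 20 patients per arm, a conventional baseline t-test misses a real, potentially prognostic imbalance of d = 0.5 about two-thirds of the time. The sketch below is illustrative only, assumes Python with numpy and scipy, and uses hypothetical variable names; it is not drawn from any of the reviewed studies.

import numpy as np
from scipy import stats

rng = np.random.default_rng(0)
n_per_arm, true_imbalance, n_sim = 20, 0.5, 20_000

declared_equivalent = 0
for _ in range(n_sim):
    control = rng.normal(0.0, 1.0, n_per_arm)
    intervention = rng.normal(true_imbalance, 1.0, n_per_arm)  # genuinely imbalanced at baseline
    _, p = stats.ttest_ind(intervention, control)
    if p >= 0.05:  # would typically be reported as "groups equivalent at baseline"
        declared_equivalent += 1

print(f"proportion of trials reporting 'baseline equivalence': {declared_equivalent / n_sim:.2f}")  # ~0.66

In other words, a non-significant baseline comparison in a trial of this size provides little assurance that the groups are actually comparable.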
متن کاملذخیره در منابع من
با ذخیره ی این منبع در منابع من، دسترسی به آن را برای استفاده های بعدی آسان تر کنید
برای دانلود متن کامل این مقاله و بیش از 32 میلیون مقاله دیگر ابتدا ثبت نام کنید
ثبت ناماگر عضو سایت هستید لطفا وارد حساب کاربری خود شوید
Journal: Psycho-Oncology
Volume 20, Issue 2 (2011)